.item←0
This lecture presents three ideas as a possible foundation for
solving the epistemological problems of artificial intelligence. The
ideas can be understood separately, but their plausibility as a basis
for AI work depends somewhat on their interaction.
In each area I have made some progress, but not enough even to be sure
that my formalization is the right one.
Briefly, the ideas are the following:
#. Extensionalize intensionality. Treat individual concepts,
general concepts, beliefs, abilities, wishes and goals as objects in a
system of first order logic. It isn't necessary to solve all the problems
of intensionality in any one system, and it may not even be possible;
an illustration of concepts treated as objects is sketched after this list.
#. Use concepts meaningful only in approximate theories.
A concept like %2can(person,action)%1 may be required for intelligent
behavior even though it is only meaningful in an approximate theory
and disappears under close analysis; a sketch of such an approximate
theory follows this list.
#. Intelligent behavior requires the ability to jump to conclusions
on insufficient evidence. An important case of this is the ability to
conjecture that the objects in a certain category whose existence follows
from known facts are all of the objects in that category, and we
have some ways of expressing such conjectures as axiom schemata of
first order logic; an example schema is given after this list.
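To illustrate the first idea, consider the distinction between Mike's
telephone number and the concept of Mike's telephone number.  (The
particular predicate and function names that follow are only
illustrative.)  Writing %2mike%1 for the person, %2Mike%1 for the concept
of him, %2telephone(mike)%1 for the number and %2Telephone(Mike)%1 for
the concept of the number, we can relate them by
%2denotes(Telephone(Mike),telephone(mike))%1.  The assertion
%2knows(pat,Telephone(Mike))%1, i.e. Pat knows Mike's telephone number,
is then an ordinary first order sentence about objects, and it does not
yield %2knows(pat,Telephone(Mary))%1 merely because
%2telephone(mike) = telephone(mary)%1, since %2Telephone(Mike)%1 and
%2Telephone(Mary)%1 remain distinct concepts.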
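The second idea may be illustrated by one approximate theory in which
%2can%1 has a definite meaning.  Suppose the world is represented as a
system %2W%1 of interacting automata, one of which stands for the person
%2p%1.  We might then write something like
%2can(p,a) ≡ ∃ω.achieves(W,p,ω,a)%1, i.e. some sequence %2ω%1 of outputs
of %2p%1's automaton, with the rest of the system behaving as the theory
prescribes, results in the action %2a%1 being performed.  (The symbols
here are merely suggestive, not a settled notation.)  Within the
automaton approximation the statement is true or false; in a more
detailed theory that also determines %2p%1's outputs, the question of
what %2p%1 can do loses its meaning.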
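The third idea can be illustrated with a schema for conjecturing that
the known objects of a kind are all of them; the names %2isblock%1,
%2B1%1, %2B2%1 and %2B3%1 are only illustrative.  Suppose the known
facts include %2isblock(B1) ∧ isblock(B2) ∧ isblock(B3)%1 and say
nothing else about what is a block.  One way of expressing the
conjecture that these are all the blocks is the schema
%2Φ(B1) ∧ Φ(B2) ∧ Φ(B3) ∧ ∀x.(Φ(x) ⊃ isblock(x)) ⊃ ∀x.(isblock(x) ⊃ Φ(x))%1,
where %2Φ%1 may be replaced by any predicate expression.  Taking
%2Φ(x)%1 to be %2x = B1 ∨ x = B2 ∨ x = B3%1 gives
%2∀x.(isblock(x) ⊃ x = B1 ∨ x = B2 ∨ x = B3)%1.  The instances of the
schema are conjectures rather than consequences of the facts, and they
must be withdrawn if another block turns up.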
Taken together, these ideas explain the following phenomenon.
People use certain ideas naively and are usually successful.
Occasionally they get a wrong result, patch the ideas in an ad hoc
manner and are again successful in the particular context. However,
philosophers and logicians have been studying these concepts for
2000 years with poor logical tools and for 70 years with good tools,
and still haven't found general and consistent formulations. Moreover,
it seems likely that a correct philosophical theory, when and if found,
will be quite subtle. It seems unlikely that one will be able to
say, %2"Oh, this is what people have been doing all along."%1
Our tentative
explanation is that our beliefs are correct but too weak
to solve problems by rigorous reasoning alone. Our ability to jump to
conclusions strengthens them enough to answer particular questions
with the aid of observation but at the risk of error. When an error
is found or even conjectured, the "inductive leaps" can be withdrawn
one at a time. It appears that the same processes can be formalized for
AI purposes with little change from present first order formalisms.
The rest of this paper is intended to
make these somewhat vague ideas more precise.